
Topic 3: Intelligent Agents

(Lecture notes by W. Hsu are used)

‹ Today's Reading: Chapter 2, Russell and Norvig, and/or Luger 1.1.4
‹ Intelligent Agent (IA) Design
• Shared requirements, characteristics of IAs
• Methodologies
– Software agents
– Reactivity vs. state
– Knowledge, inference, and uncertainty
‹ Intelligent Agent Frameworks
• Reactive
• With state
• Goal-based
• Utility-based

Intelligent Agents: Overview

‹ Agent: Definition
• Any entity that perceives its environment through sensors and acts upon that environment through effectors
• Examples (class discussion): human, robotic, software agents
‹ Perception
• Signal from environment
• May exceed sensory capacity
‹ Sensors
• Acquire percepts
• Possible limitations
‹ Action
• Attempts to affect environment
• Usually exceeds effector capacity
‹ Effectors
• Transmit actions
• Possible limitations
[Figure: agent coupled to environment – sensors take in percepts, effectors commit actions]

How Agents Should Act

‹ Rational Agent: Definition
• Informal: "does the right thing, given what it believes from what it perceives"
• What is "the right thing"?
– First approximation: action that maximizes success of agent
– Limitations to this definition?
• Issues to be addressed now
– How to evaluate success
– When to evaluate success
• Issues to be addressed later in this course
– Uncertainty (in environment, in actions)
– How to express beliefs, knowledge

How Agents Should Act [2]

‹ Why Study Rationality?
• Recall: aspects of intelligent behavior (last lecture)
– Engineering objectives: optimization, problem solving, decision support
– Scientific objectives: modeling correct inference, learning, planning
• Rational cognition: formulating plausible beliefs, conclusions
• Rational action: "doing the right thing" given beliefs

Rational Agents

‹ "Doing the Right Thing"
• Committing actions
– Limited to set of effectors
– In context of what agent knows
• Specification (cf. software specification)
– Preconditions, postconditions of operators
– Caveat: not always perfectly known (for given environment)
– Agent may also have limited knowledge of specification

Rational Agents [2]

‹ Agent Capabilities: Requirements
• Choice: select actions (and carry them out)
• Knowledge: represent knowledge about environment
• Perception: capability to sense environment
• Criterion: performance measure to define degree of success
‹ Possible Additional Capabilities
• Memory (internal model of state of the world)
• Knowledge about effectors, reasoning process (reflexive reasoning)

Measuring Performance

‹ Performance Measure: How to Determine Degree of Success
• Definition: criteria that determine how successful the agent is
• Clearly, varies over
– Agents
– Environments
• Possible measures?
– Subjective (agent may not have capability to give accurate answer!)
– Objective: outside observation

Measuring Performance [2]

• Example: web crawling agent
– Number of hits
– Number of relevant hits
– Ratio of relevant hits to pages explored, resources expended
– Caveat: "you get what you ask for" (issues: redundancy, etc.)
‹ When to Evaluate Success
• Depends on objectives (short-term efficiency, consistency, etc.)
• Is the task episodic? Are there milestones? Reinforcements? (e.g., games)
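The web-crawler measures above can be sketched as a concrete scoring function. The function name, the resource penalty, and its weight are illustrative assumptions, not part of the lecture:

```python
def crawler_performance(pages_explored, relevant_hits, resources_expended):
    """Toy performance measure for a web-crawling agent: reward relevant
    hits per page explored, penalize resource use. The 0.01 weight is an
    arbitrary illustrative choice."""
    if pages_explored == 0:
        return 0.0
    relevance_ratio = relevant_hits / pages_explored
    return relevance_ratio - 0.01 * resources_expended

# Raw hit count alone would reward redundant, indiscriminate crawling
# ("you get what you ask for"); the ratio form penalizes it.
score = crawler_performance(pages_explored=200, relevant_hits=50,
                            resources_expended=10)
```

Note the "you get what you ask for" caveat applies here too: any fixed weighting can be gamed by an agent optimizing it literally.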

Knowledge in Agents

‹ Rationality versus Omniscience
• Nota Bene (NB): not the same
• Distinction
– Omniscience: knowing actual outcome of all actions
– Rationality: knowing plausible outcome of all actions
– Example: is crossing the street to greet a friend too risky?
• Key question in AI
– What is a plausible outcome?
– Especially important in knowledge-based expert systems
– Of practical importance in planning, machine learning

Knowledge in Agents [2]

• Related questions
– How can an agent make rational decisions given beliefs about outcomes of actions?
– Specifically, what does it mean (algorithmically) to "choose the best"?
‹ Limitations of Rationality
• Based only on what agent can perceive and do
• Based on what is "likely" to be right, not what "turns out" to be right

What Is Rational?

‹ Criteria
• Determine what is rational at any given time
• Vary with agent, environment, situation
‹ Performance Measure
• Specified by outside observer or evaluator
• Applied (consistently) to (one or more) IAs in given environment
‹ Percept Sequence
• Definition: entire history of percepts gathered by agent
• NB: may or may not be retained fully by agent (issue: state and memory)

What Is Rational? [2]

‹ Agent Knowledge
• Of environment – "required"
• Of self (reflexive reasoning)
‹ Feasible Action
• What can be performed
• What agent believes it can attempt?

Ideal Rationality

‹ Ideal Rational Agent
• Given: any possible percept sequence
• Do: ideal rational behavior
– Whatever action is expected to maximize performance measure
– NB: expectation – informal sense (for now); mathematical foundation soon
• Basis for action
– Evidence provided by percept sequence
– Built-in knowledge possessed by the agent

Ideal Rationality [2]

‹ Ideal Mapping from Percepts to Actions
• Figure 2.2, R&N
• Mapping p: percept sequence → action
– Representing p as list of pairs: infinite (unless explicitly bounded)
– Using p: specifies ideal mapping from percepts to actions (i.e., ideal agent)
– Finding explicit p: in principle, could use trial and error
– Other (implicit) representations may be easier to acquire!
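The explicit representation of p as a list of pairs can be sketched as a table-driven agent that looks up its entire percept history. The vacuum-style percepts and table entries below are hypothetical; the point is that the table must cover every distinct percept sequence, which is why the explicit form blows up:

```python
def make_table_driven_agent(table):
    """Table-driven agent: store the whole percept sequence and look it
    up in an explicit mapping p: percept sequence -> action."""
    percepts = []
    def agent(percept):
        percepts.append(percept)
        # Returns None for any sequence the table does not cover,
        # illustrating why explicit p is infeasible unless bounded.
        return table.get(tuple(percepts))
    return agent

# Toy table over a two-percept world (entries are hypothetical):
table = {
    ("clean",): "move",
    ("dirty",): "suck",
    ("clean", "dirty"): "suck",
}
agent = make_table_driven_agent(table)
```

Implicit representations (rules, state-update functions, as in the frameworks below) compress this table.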

Autonomy

‹ Built-In Knowledge
• What if agent ignores percepts?
• Possibility
– All actions based on agent's own knowledge
– Agent said to lack autonomy
• Examples
– "Preprogrammed" or "hardwired" industrial robots
– Clocks
– Other sensorless automata
– NB: to be distinguished from closed versus open loop systems

Autonomy [2]

‹ Justification for Autonomous Agents
• Sound engineering practice: "Intelligence demands robustness, adaptivity"
• This course: mathematical and CS basis of autonomy in IAs

Structure of Intelligent Agents

‹ Agent Behavior
• Given: sequence of percepts
• Return: IA's actions
– Simulator: description of results of actions
– Real-world system: committed action
‹ Agent Programs
• Functions that implement program
• Assumed to run in computing environment (architecture)
– Hardware architecture: computer organization
– Software architecture: programming languages, operating systems
• Agent = architecture + program

Example: Automated Taxi Driver

‹ Agent Type: Taxi Driver
‹ Percepts
• Visual: cameras
• Profilometer: speedometer, tachometer, odometer
• Other: GPS, sonar, interactive (microphone)
‹ Actions
• Steer, accelerate, brake
• Talk to passenger

Example: Automated Taxi Driver [2]

‹ Goals
• Safe, legal, fast, comfortable
• Maximize profits
‹ Environment
• Roads
• Other traffic, pedestrians
• Customers
‹ Discussion: Performance Requirements for Open-Ended Task

Agent Framework: Simple Reflex Agents [1]

[Figure: sensors report what the world is like now; condition-action rules select what action to do now; effectors act on environment]

Agent Framework: Simple Reflex Agents [2]

‹ Implementation and Properties
• Instantiation of generic skeleton agent: Figure 2.8
• function SimpleReflexAgent (percept) returns action
– static: rules, set of condition-action rules
– state ← Interpret-Input (percept)
– rule ← Rule-Match (state, rules)
– action ← Rule-Action (rule)
– return action

Agent Framework: Simple Reflex Agents [3]

‹ Advantages
• Selection of best action based only on current state of world and rules
• Simple, very efficient
• Sometimes robust
‹ Limitations and Disadvantages
• No memory (doesn't keep track of world)
• Limits range of applicability
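The SimpleReflexAgent skeleton above might be realized in Python as follows. The vacuum-style rules and the representation of rules as (condition, action) pairs are illustrative assumptions, not from Figure 2.8:

```python
def simple_reflex_agent(rules, interpret_input):
    """Sketch of the SimpleReflexAgent skeleton: rules is a static
    collection of condition-action pairs; no internal state is kept."""
    def agent(percept):
        state = interpret_input(percept)    # state <- Interpret-Input(percept)
        for condition, action in rules:     # rule  <- Rule-Match(state, rules)
            if condition(state):
                return action               # action <- Rule-Action(rule)
        return "no-op"                      # no matching rule
    return agent

# Hypothetical vacuum-world rules: react to the current percept only.
rules = [
    (lambda s: s == "dirty", "suck"),
    (lambda s: s == "clean", "move"),
]
agent = simple_reflex_agent(rules, interpret_input=lambda p: p)
```

Because each decision sees only the current percept, the agent cannot distinguish two histories that end in the same percept, which is exactly the "no memory" limitation listed above.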

Agent Frameworks: (Reflex) Agents with State [1]

[Figure: internal state, plus knowledge of how the world evolves and what the agent's actions do, feeds the estimate of what the world is like now; condition-action rules select what action to do now]

Agent Frameworks: (Reflex) Agents with State [2]

‹ Implementation and Properties
• Instantiation of generic skeleton agent: Figure 2.10
• function ReflexAgentWithState (percept) returns action
– static: state, description of current world state; rules, set of condition-action rules
– state ← Update-State (state, percept)
– rule ← Rule-Match (state, rules)
– action ← Rule-Action (rule)
– return action
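The ReflexAgentWithState skeleton differs from the simple reflex agent only in the Update-State step; a minimal sketch, with a hypothetical update function and rules:

```python
def reflex_agent_with_state(rules, update_state, initial_state=None):
    """Sketch of ReflexAgentWithState: like the simple reflex agent, but
    an internal state is updated from each percept before rule matching."""
    state = initial_state
    def agent(percept):
        nonlocal state
        state = update_state(state, percept)   # state <- Update-State(state, percept)
        for condition, action in rules:        # rule  <- Rule-Match(state, rules)
            if condition(state):
                return action                  # action <- Rule-Action(rule)
        return "no-op"
    return agent

# Hypothetical state: count consecutive "dirty" percepts, so the agent
# can react to history, not just the present percept.
def update(state, percept):
    return (state or 0) + 1 if percept == "dirty" else 0

agent = reflex_agent_with_state(
    rules=[(lambda s: s >= 2, "call-cleaner"),
           (lambda s: s == 1, "suck")],
    update_state=update,
)
```

Two identical percepts now yield different actions depending on what came before, which a stateless reflex agent cannot do.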

Agent Frameworks: (Reflex) Agents with State [3]

‹ Advantages
• Selection of best action based on current internal state of world and rules
• Able to reason over past states of world
• Still efficient, somewhat more robust
‹ Limitations and Disadvantages
• No way to express goals and preferences relative to goals
• Still limited range of applicability

Agent Frameworks: Goal-Based Agents [1]

[Figure: internal state and world model predict what the world will be like if the agent does action A; goals determine what action to do now]

Agent Frameworks: Goal-Based Agents [2]

‹ Implementation and Properties
• Instantiation of generic skeleton agent: Figure 2.11
• Functional description
– Chapter 13: classical planning
– Requires more formal specification

Agent Frameworks: Goal-Based Agents [3]

‹ Advantages
• Able to reason over goal, intermediate, and initial states
• Basis: automated reasoning
– One implementation: theorem proving (first-order logic)
– Powerful representation language and inference mechanism
‹ Limitations and Disadvantages
• Efficiency limitations: can't feasibly solve many general problems
• No way to express preferences
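The deliberation inside a goal-based agent can be sketched as search: from the initial state, find an action sequence whose end state satisfies the goal test. Breadth-first search is used here as one possible mechanism (the slides leave it open; theorem proving is another); the toy one-dimensional world is hypothetical:

```python
from collections import deque

def goal_based_plan(initial, goal_test, successors):
    """Find an action sequence from initial to a goal state by
    breadth-first search. successors(state) yields (action, next_state)
    pairs describing what each action would do."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):            # reason over goal states
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # no achievable plan

# Hypothetical 1-D world: move left/right from cell 0 to reach cell 3.
plan = goal_based_plan(
    initial=0,
    goal_test=lambda s: s == 3,
    successors=lambda s: [("right", s + 1), ("left", s - 1)]
                         if -5 <= s <= 5 else [],
)
```

Note the goal test is binary: any goal state is equally good, which is exactly the "no way to express preferences" limitation that motivates utility-based agents next.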

Agent Frameworks: Utility-Based Agents [1]

[Figure: internal state and world model predict what the world will be like if the agent does A; a utility function rates how happy the agent will be in that state, determining what action to do now]

Agent Frameworks: Utility-Based Agents [2]

‹ Implementation and Properties
• Instantiation of generic skeleton agent: Figure 2.12
• function UtilityBasedAgent (percept) returns action
– static: state, description of current world state; utility, function over world states
– state ← Update-State (state, percept)
– action ← action whose predicted resulting state maximizes utility
– return action
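The choice rule in the diagram ("how happy will I be") might be sketched as maximizing the utility of each action's predicted resulting state. The taxi-like speed example, the result model, and the utility function below are all illustrative assumptions:

```python
def utility_based_agent(actions, result, utility):
    """Sketch of a utility-based choice rule: evaluate each available
    action by the utility of its predicted resulting state and pick
    the maximum. result() plays the role of the world model ("what it
    will be like if I do A"); utility() rates the outcome."""
    def agent(state):
        return max(actions, key=lambda a: utility(result(state, a)))
    return agent

# Hypothetical taxi-like choice: state is current speed; utility
# prefers speeds near 50, trading off speed against safety.
agent = utility_based_agent(
    actions=["brake", "cruise", "accelerate"],
    result=lambda s, a: {"brake": s - 10,
                         "cruise": s,
                         "accelerate": s + 10}[a],
    utility=lambda speed: -abs(speed - 50),
)
```

Unlike a goal test, the utility function ranks all outcomes, so conflicting goals (fast vs. safe) can be traded off by degree.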

Agent Frameworks: Utility-Based Agents [3]

‹ Advantages
• Expresses degrees of preference (utility) over states, not just goal vs. non-goal
• Supports rational tradeoffs among conflicting goals
‹ Limitations and Disadvantages
• Utility function can be difficult to specify
• Evaluating predicted outcomes can be computationally expensive

Looking Ahead: Search
Solving Problems by Searching

• Problem-solving agents: design, specification, implementation
• Specification components
– Problems – formulating well-defined ones
– Solutions – requirements, constraints
• Measuring performance
‹ Formulating Problems as (State Space) Search
‹ Data Structures Used in Search

Problem-Solving Agents [1]: Preliminary Design

‹ Justification
• Rational IAs: act to reach environment that maximizes performance measure
• Need to formalize, operationalize this definition
‹ Practical Issues
• Hard to find appropriate sequence of states
• Difficult to translate into IA design
‹ Goals
• Chapter 2, R&N: simplifies task of translating agent specification to formal design
• First step in problem solving: formulation of goal(s) – "accept no substitutes"
• Chapters 3-4, R&N: goal ≡ {world states | goal test is satisfied}

Problem-Solving Agents [2]: Preliminary Design

‹ Problem Formulation
• Given: initial state, desired goal, specification of actions
• Find: achievable sequence of states (actions) mapping from initial to goal state
‹ Search
• Actions: cause transitions between world states (e.g., applying effectors)
• Typically specified in terms of finding sequence of states (operators)

Problem-Solving Agents [1]: Specification

‹ Input: Informal Objectives; Initial, Intermediate, Goal States; Actions
‹ Output
• Path from initial to goal state
• Leads to design requirements for state space search problem
‹ Logical Requirements
• States: representation of state of world (example: starting city, graph representation of Romanian map)
• Operators: descriptors of possible actions (example: moving to adjacent city)
• Goal test: state → boolean (example: at destination city?)
• Path cost: based on search, action costs (example: number of edges traversed)

Problem-Solving Agents [2]: Specification

‹ Operational Requirements
• Search algorithm to find path
• Objective criterion: minimum cost (this and next 3 lectures)
‹ Environment
• Agent can search in environment according to specifications
• Sometimes has full state and action descriptors; sometimes not!
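The four logical requirements (states, operators, goal test, path cost) can be sketched on a small fragment of the Romanian-map example. The edge list below is an abbreviated subset of the map and distances are omitted, so path cost here is simply the number of edges traversed; with that uniform cost, breadth-first search meets the minimum-cost criterion:

```python
from collections import deque

# States: city names. Operators: move to an adjacent city.
# Abbreviated subset of the R&N Romania map (distances omitted).
GRAPH = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def solve(initial, goal_test):
    """Breadth-first state-space search: returns a path (list of cities)
    from initial to the first state satisfying goal_test, minimizing
    path cost measured in edges traversed."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if goal_test(path[-1]):            # goal test: state -> boolean
            return path
        for city in GRAPH[path[-1]]:       # operators: adjacent cities
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

route = solve("Arad", lambda city: city == "Bucharest")
```

With real road distances as step costs, breadth-first search would no longer suffice; the cost-sensitive algorithms of the next lectures address that.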
