
Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. An agent thus maps percept sequences to actions. An agent may have goals, and a performance measure is required to evaluate it.

Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.

Agents and environments

The agent function maps from percept histories to actions: f: P* → A. The agent program runs on the physical architecture to produce f:

agent = architecture + program

Example: Vacuum-cleaner world

Percepts: location and contents, e.g., [A, Dirty]. Actions: Left, Right, Suck, NoOp.
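A minimal sketch of the vacuum-world agent function as a lookup table (A and B are the two squares; the action names follow the list above):

```python
# Vacuum-world agent function as a lookup table.
# Percept: (location, contents); action: "Left", "Right", "Suck", or "NoOp".
AGENT_TABLE = {
    ("A", "Clean"): "Right",  # square A is clean: move on to B
    ("A", "Dirty"): "Suck",   # dirt under the agent: clean it
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    return AGENT_TABLE[percept]

print(vacuum_agent(("A", "Dirty")))  # -> Suck
```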

Sensors and Effectors


An agent perceives its environment through its sensors. The complete set of inputs at a given time is called a percept; the agent may also work with a sequence of percepts.

An agent changes its environment through its effectors, also called actuators; what they do are called actions.

Performance
The behavior of an agent is evaluated with respect to the actions it takes. The agent function is a mapping from perception histories to actions; the ideal mapping specifies which action the agent should take at any time.

Performance measure: an objective criterion used to characterize how successful an agent is.

Parameters: speed, time, money, accuracy.

Autonomous agent: decides which action to take in the current situation to maximize its progress toward its goal.

Environment types

Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time. Example: chess is fully observable; bridge is only partially observable.

Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent (example: chess). In a stochastic environment the next state cannot be fully predicted (examples: a robot in the real world, Ludo); a deterministic but partially observable environment may also appear stochastic to the agent. If the environment is deterministic except for the actions of other agents, it is strategic (e.g., chess).

Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. In a sequential environment, the agent engages in a series of connected episodes.

Static (vs. dynamic): the environment is unchanged while the agent is deliberating. The environment is semidynamic if it does not change with the passage of time but the agent's performance score does. A dynamic environment changes over time independently of the agent's actions.

Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multiagent): an agent operating by itself in an environment.
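The dimensions above can be tabulated for concrete task environments. The classifications below are illustrative assumptions, not definitive (e.g., chess is listed as semidynamic because a chess clock changes the score over time):

```python
# Classifying example task environments along the dimensions above.
# Tuple order: observable, deterministic, episodic, static, discrete, agents.
ENVIRONMENTS = {
    "chess":        ("fully",     "strategic",  "sequential", "semidynamic", "discrete",   "multi"),
    "ludo":         ("fully",     "stochastic", "sequential", "static",      "discrete",   "multi"),
    "taxi driving": ("partially", "stochastic", "sequential", "dynamic",     "continuous", "multi"),
}

for name, props in ENVIRONMENTS.items():
    print(f"{name}: {props}")
```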

Complex environment: an environment that is knowledge rich (it contains an enormous amount of knowledge) and input rich (it produces a huge number of inputs). The agent must have a way of managing this complexity, with good sensing strategies and an attention mechanism.

Agent Architecture
An agent is completely specified by the agent function mapping percept sequences to actions. One agent function (or a small equivalence class of functions) is rational. The aim is to find a way to implement the rational agent function concisely.

Types of agents

Rational agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.

Performance measure: an objective criterion for the success of an agent's behavior. E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

So, for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
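One illustrative (assumed, not standard) performance measure for a vacuum agent: +10 for each dirty square cleaned, -1 for each movement action; this rewards cleaning while penalizing wasted motion:

```python
# Illustrative (assumed) performance measure for a vacuum agent:
# +10 per Suck (assuming Suck is only issued on a dirty square),
# -1 per movement action.
def performance(history):
    """history: list of actions taken, e.g. ["Suck", "Right", "Suck"]."""
    score = 0
    for action in history:
        if action == "Suck":
            score += 10
        elif action in ("Left", "Right"):
            score -= 1
    return score

print(performance(["Suck", "Right", "Suck"]))  # -> 19
```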

Table-lookup agent
Drawbacks: the table is huge; it takes a long time to build; the agent has no autonomy; and even with learning, it would take a long time to learn the table entries.
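The table-lookup scheme can be sketched as follows. Note that the table is keyed by the *entire* percept sequence, which is exactly why it grows without bound (the first drawback above). The tiny table here is a hypothetical illustration:

```python
# Table-driven agent sketch: indexes the table by the whole
# percept sequence seen so far, so the table grows without bound.
def make_table_driven_agent(table):
    percepts = []                       # persistent percept history
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return agent

# Tiny illustrative table keyed by percept sequences:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
```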

Percept based agent


Information comes from the sensors as percepts, which update the agent's current view of the world and trigger actions through the effectors. Such reactive (stimulus-based) agents keep no notion of history: the current state is whatever the sensors see right now.
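A percept-based (reflex) agent for the vacuum world can be sketched with condition-action rules that look only at the current percept, with no memory of past percepts:

```python
# Percept-based (reflex) vacuum agent: acts only on the current
# percept via condition-action rules; no percept history is kept.
def reflex_vacuum_agent(percept):
    location, contents = percept
    if contents == "Dirty":   # rule: dirt under the agent -> Suck
        return "Suck"
    elif location == "A":     # rule: A is clean -> go check B
        return "Right"
    else:                     # rule: B is clean -> go check A
        return "Left"

print(reflex_vacuum_agent(("B", "Dirty")))  # -> Suck
```

Unlike the table-lookup agent, this program is constant-size no matter how long the agent runs.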

Simple reflex agents

Goal-based agents

Utility-based agents

Learning agents

What is an intelligent agent


An intelligent agent is a system that:

perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or some other complex environment);

reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and

acts upon that environment to realize the set of goals or tasks for which it was designed.

Performance measure: used to evaluate how well the agent performs.
[Diagram: the user/environment provides input to the intelligent agent through sensors; the agent acts back on the user/environment through effectors (output).]

What an intelligent agent can do

An intelligent agent can:

collaborate with its user to improve the accomplishment of his or her tasks;

carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;

monitor events or procedures for the user;

advise the user on how to perform a task;

train or teach the user;

help different users collaborate.

Ability to learn

The ability to improve its competence and efficiency.

An agent is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. An agent is improving its efficiency if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space).

Extended agent architecture


The learning engine implements methods for extending and refining the knowledge in the knowledge base.
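A minimal sketch of this idea (all class and method names here are hypothetical, for illustration only): a learning engine that extends a rule-based knowledge base from corrected examples.

```python
# Hypothetical sketch: a learning engine extending and refining
# the rules in a knowledge base from (percept, correct action) pairs.
class KnowledgeBase:
    def __init__(self):
        self.rules = {}              # percept -> action

class LearningEngine:
    def __init__(self, kb):
        self.kb = kb
    def learn(self, percept, correct_action):
        # Extend the KB with a new rule, or refine (overwrite)
        # an existing rule that disagrees with the correction.
        self.kb.rules[percept] = correct_action

kb = KnowledgeBase()
engine = LearningEngine(kb)
engine.learn(("A", "Dirty"), "Suck")
print(kb.rules[("A", "Dirty")])  # -> Suck
```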

[Diagram: the extended agent architecture. The user/environment interacts through sensors (input) and effectors (output) with an intelligent agent containing a problem solving engine, a learning engine, and a knowledge base (an ontology plus rules/cases/methods).]

How are agents built


[Diagram: a domain expert and a knowledge engineer interact through dialog; the knowledge engineer programs the intelligent agent's knowledge base, which the inference engine uses to produce results.]

A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems, and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.

Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem-solving knowledge. This takes time and effort. Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details that they consider obvious. This form of knowledge is very different from the form in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).

This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient (and is known as "the knowledge acquisition bottleneck" of the AI systems development process).
