
Agents in Artificial Intelligence

Artificial intelligence is defined as the study of rational agents. A rational agent can be anything that makes decisions: a person, a firm, a machine, or a piece of software. It carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
An AI system is composed of an agent and its environment. The agents
act in their environment. The environment may contain other agents. An
agent is anything that can be viewed as:
• perceiving its environment through sensors, and
• acting upon that environment through actuators.

Note: every agent can perceive its own actions (but not always their effects).

To understand the structure of intelligent agents, we should be familiar with the architecture and the agent program. The architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. The agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of everything the agent has perceived to date) to an action.
Agent = Architecture + Agent Program
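As a minimal sketch (the table-driven form and all names here are illustrative assumptions, not part of the original text), an agent program that realizes the agent function as a lookup from percept sequences to actions might look like this:

```python
from typing import Any, Dict, List, Tuple

class TableDrivenAgent:
    """Illustrative agent program: realizes the agent function as an explicit
    table from percept sequences to actions (assumed structure, for demonstration only)."""

    def __init__(self, table: Dict[Tuple[Any, ...], str]):
        self.table = table              # the agent function, written out as a table
        self.percepts: List[Any] = []   # percept sequence: everything perceived so far

    def program(self, percept: Any) -> str:
        self.percepts.append(percept)
        # Map the whole percept history to an action; fall back to a no-op.
        return self.table.get(tuple(self.percepts), "NoOp")
```

The architecture would then run this program in a loop, feeding it sensor readings and passing the returned action to the actuators.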
Rationality
Rationality is the state of being reasonable, sensible, and having a good sense of judgment.
Rationality is concerned with the expected actions and results, given what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
What is an Ideal Rational Agent?
An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, on the basis of −
• its percept sequence, and
• its built-in knowledge base.
The rationality of an agent depends on the following −
• The performance measure, which determines the degree of success.
• The agent's percept sequence to date.
• The agent's prior knowledge about the environment.
• The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful given the percept sequence. The problem the agent solves is characterized by its Performance measure, Environment, Actuators, and Sensors (PEAS).
Examples of Agents

A software agent has keystrokes, file contents, and received network packets as sensors, and displays on the screen, files, and sent network packets as actuators.

A human agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other body parts which act as actuators.

A robotic agent has cameras and infrared range finders which act as sensors, and various motors which act as actuators.

TYPES OF AGENTS
Simple Reflex Agents

• They choose actions based only on the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment must be fully observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an
action.
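As a minimal sketch of such rules (the two-location vacuum world, the percept format, and the action names are illustrative assumptions), a simple reflex agent can be written as a handful of condition-action rules:

```python
# Minimal sketch of a simple reflex agent for a two-location vacuum world
# (the world, percept format, and action names are illustrative assumptions).
def simple_reflex_vacuum_agent(percept):
    location, status = percept      # only the current percept is used, no history
    # Condition-action rules: each condition maps directly to an action.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```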

Model-Based Reflex Agents

They use a model of the world to choose their actions and maintain an internal state.
Model − knowledge about “how things happen in the world”.
Internal State − a representation of the unobserved aspects of the current state, based on the percept history.
Updating the state requires information about −
• how the world evolves, and
• how the agent's actions affect the world.
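A minimal sketch of this idea (the transition_model and rules callables are assumptions used only for illustration):

```python
# Illustrative model-based reflex agent: keeps an internal state and updates it
# with a world model before applying condition-action rules (all names assumed).
class ModelBasedReflexAgent:
    def __init__(self, transition_model, rules):
        self.model = transition_model   # how the world evolves and how actions affect it
        self.rules = rules              # condition-action rules defined over internal states
        self.state = None               # internal state: best guess at the current world state
        self.last_action = None

    def program(self, percept):
        # Combine the previous state, the last action, and the new percept
        # to estimate the current (partly unobserved) state of the world.
        self.state = self.model(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```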
Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is modeled explicitly and can therefore be modified.
Goal − a description of desirable situations.
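As a brief illustrative sketch (the predict and goal_test callables are assumptions), a goal-based agent picks an action whose predicted outcome satisfies its goal:

```python
# Illustrative goal-based action selection: predict the result of each available
# action with a world model and pick one that satisfies the goal (names assumed).
def goal_based_choice(state, actions, predict, goal_test):
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return None  # no single action reaches the goal; a real agent would plan a sequence
```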
Utility-Based Agents

They choose actions based on a preference (utility) for each state.
Goals alone are inadequate when −
• there are conflicting goals, of which only a few can be achieved;
• goals have some uncertainty of being achieved, and the likelihood of success must be weighed against the importance of each goal.
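A minimal sketch of choosing the action with the highest expected utility (the outcome_distribution and utility callables are assumptions for illustration):

```python
# Illustrative utility-based action selection: weigh each possible outcome of an
# action by its probability and pick the action with the highest expected utility.
def utility_based_choice(state, actions, outcome_distribution, utility):
    def expected_utility(action):
        return sum(probability * utility(outcome)
                   for outcome, probability in outcome_distribution(state, action))
    return max(actions, key=expected_utility)
```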

Types of Environment in AI

We can classify environments to predict how difficult the AI task will be.

Fully Observable

An environment is fully observable when it is possible to determine its complete state each time the agent needs to make a decision. For example, a checkers game can be classed as fully observable, because the agent can observe the full state of the game (how many pieces the opponent has, how many pieces we have, and so on).

Partially Observable

In contrast to fully observable environments, the agent may need a memory of past decisions to make the optimal choice within its environment. An example of this is a poker game: the agent may not know what cards the opponent holds and will have to make the best decision based on the cards the opponent has played.

Deterministic

Deterministic environments are those in which the agent's actions uniquely determine the outcome. For example, if we move a pawn from A2 to A3 while playing chess, that move always works; there is no uncertainty in the outcome of that move.

Stochastic

Unlike deterministic environments, stochastic environments involve a certain amount of randomness. Using our poker example, when a card is dealt there is some randomness in which card will be drawn.

Discrete

In discrete environments, we have a finite number of action choices and a finite number of things that we can sense. Using our checkers example again, there are a finite number of board positions and a finite number of things we can do within the checkers environment.
Continuous

In continuous environments, percepts and actions can take values over a continuous range. To apply this to a medical context, a patient's temperature and blood pressure are continuous variables that can be sensed by medical agents designed to capture vital signs from patients and then recommend diagnostic action to healthcare professionals.

Benign

In benign environments, the environment has no objective of its own that would contradict your objective. For example, rain might ruin your plans to play cricket (a great game, I promise), but it doesn't rain just because Thor (God of Thunder) doesn't want you to play cricket; it happens for reasons unrelated to your objective.

Adversarial

Adversarial environments, on the other hand, are out to get you. This is commonplace in games, such as video games where bosses and enemies are out to destroy your plans of getting that high score, or chess where an AI is out to checkmate you.
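To tie these properties together, here is a small illustrative summary of the two game examples used above (the labels simply restate the discussion and are not exhaustive):

```python
# Illustrative summary of the example environments discussed above.
environment_properties = {
    "checkers": {"observability": "fully observable",
                 "outcomes": "deterministic",
                 "space": "discrete",
                 "other agents": "adversarial"},
    "poker":    {"observability": "partially observable",
                 "outcomes": "stochastic",
                 "space": "discrete",
                 "other agents": "adversarial"},
}
```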

PEAS

Beyond the agent types above, many agents are being designed and created today that differ from one another in some respects and have others in common. To group similar kinds of agents together, a system known as the PEAS system was developed.

PEAS stands for Performance, Environment, Actuators, and Sensors. Based on these properties, agents can be grouped together or differentiated from each other. Each agent has the following properties defined for it.
Performance:
The output we get from the agent. All the results that the agent produces after processing come under its performance.

Environment:
All the surrounding things and conditions of the agent fall in this section. It consists of all the conditions under which the agent works.

Actuators:
The devices, hardware or software, through which the agent performs actions or produces a result are the actuators of the agent.

Sensors:
The devices through which the agent observes and perceives its environment are
the sensors of the agent.

EXAMPLE:

Let us take the example of a self-driving car. As the name suggests, it is a car that drives on its own, taking all the necessary decisions while driving without any help from the user. In other words, this car drives itself and requires no driver. The PEAS description for this agent is as follows:

Performance: The performance factors for a self-driving car are speed, safety while driving (of both the car and the user), the time taken to drive to a particular location, the comfort of the user, etc.

Environment: The road on which the car is driven, other cars on the road, pedestrians, crossings, road signs, traffic signals, etc. all act as its environment.

Actuators: All those devices through which the control of the car is handled are its actuators, for example the steering, accelerator, brakes, horn, music system, etc.

Sensors: All those devices through which the car gets an estimate of its surroundings and can draw perceptions from them are its sensors, for example the camera, speedometer, GPS, odometer, sonar, etc.
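As a small illustrative sketch (the PEAS class and the listed values simply restate the description above), such a PEAS description could be written down as a data structure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """Illustrative container for a PEAS task-environment description."""
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

self_driving_car = PEAS(
    performance=["speed", "safety", "time to destination", "passenger comfort"],
    environment=["roads", "other cars", "pedestrians", "crossings",
                 "road signs", "traffic signals"],
    actuators=["steering", "accelerator", "brakes", "horn", "music system"],
    sensors=["camera", "speedometer", "GPS", "odometer", "sonar"],
)
```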
